The Meta Philosophy of Science
[From the author's Preface in Dr. Tom Van
Flandern's Dark Matter, Missing Planets and New Comets (1993;
2nd ed. 1999), available in the store at this site; updated
2002/05/05]:
I began to form some hypotheses about what was wrong with these other
bodies of knowledge [outside astronomy], and why. I particularly
noted a regular practice of not re-examining the fundamental
assumptions underlying a theory once it had gained "accepted"
status, almost no matter how incompatible with it some new
observation or experiment might be. And I saw powerful vested
interests in a "status quo"
develop around certain accepted theories.
It gradually became clear that a lot of people had a lot to lose if an
accepted theory or practice were challenged: the authors of the
original theory, whose names had become well known; all those who
have published papers which reference or depend on the theory;
journal editors and referees who have made decisions or criticized
other works based on a theory; funding agencies which have paid for
research which presupposes a theory; instrument builders and
experiment designers who have spent career time testing ideas which
spring from a theory; journalists and writers whose publications
have featured or promoted a theory; teachers and interested members
of the public who have learned a theory, been impressed by the
wonder of it, and who have no wish to have to teach or learn a new
theory; and students, who need to find a job in their field of
training.
It has been my sad observation that by mid-career there are very few
professionals left truly working for the advancement of science, as
opposed to the advancement of self. And given enough people with
strong enough interests, professional peer pressure takes over from
there. Peer pressure in science, as elsewhere in society, consists
of alternately attacking and ignoring the people who advocate a
contrary idea, and discrediting their motives and/or competence, in
order to achieve conformity. Even when it is not effective directly,
it is usually successful at ensuring that the contrary person or
idea gains few allies, and remains isolated. In short, those who may
suspect the need for a radical change in an accepted theory have no
interests or motivations as strong as those supporting the status
quo. And members of the former group usually lack the background and
confidence to challenge the latter group, who are the "recognized
experts" in the field and well-able to defend their own
theories.
As if there weren't already enough inertia opposing major changes of
models, I see yet
another phenomenon -- new to our era of rapid progress in science --
which militates against change even in the face of overwhelming need
for it. Few scientists consider themselves qualified very far
outside their own areas of expertise. Since each expert can account
for only a small portion of the data dealing with a model, he defers
to the other experts to support the model in other areas. Few, if
any, scientists have the breadth of knowledge to see the full
picture for a given model. So the model remains supported because
many individual authorities support it, none of whom have the
expertise to criticize the model overall, and all of whom have the
utmost confidence in the others collectively. Authorities can
continue to multiply indefinitely, with no one taking responsibility
for integrating all their combined knowledge. As a result, the
existing models get perpetuated regardless of merit or the extent of
counter-evidence, because "so many experts can't all be wrong." Thus
each expert is persuaded to force-fit his own data into the accepted
model, oblivious that the others are doing the same.
However, I had learned by then to start being more open-minded
toward new ideas, no longer dismissing them out of hand without a
reason strong enough that even the idea's proposer could understand it.
Whereas before it was rarely "worth my time" to deal with proposed
new ideas, I now felt quite the opposite. This was chiefly because
even in the process of proving that a new idea was false, I learned
a great deal about the fundamentals underlying the challenged
theory. I came to see the soft underbelly of many theories with a
tough outer shell. I found a lot of unsuspected
weaknesses.
The first challenging new idea which I entertained as seriously viable
was P.A.M. Dirac's proposal of the variability of the universal
gravitational "constant." I performed a test of the idea using
observations of the Moon's orbital motion around the Earth, and
obtained results which supported Dirac's theory and seemed to be
statistically significant. This experience led me to realize how
fragile were the assumptions underlying the Big Bang and other
theories of cosmology, when even the constancy of gravitation, the
most important force in shaping the large-scale structure of the
universe, had been called into question. And I saw that very few of
my colleagues were taking seriously the idea that anything could be
wrong at such a fundamental level. Their attitude was
understandable, but unscientific.
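In outline, the logic of that test can be sketched with the standard
two-body relations below. This is a minimal sketch under textbook
assumptions only; the actual analysis also had to separate out the
much larger tidal deceleration of the Moon.

```latex
% A minimal sketch of Dirac's hypothesis and the lunar test,
% using standard two-body relations (textbook assumptions; the
% real analysis must also separate the tidal deceleration).

% Dirac's Large Numbers Hypothesis: G scales inversely with the
% age of the universe t, so its fractional rate of change is of
% order the Hubble rate:
\[
  G \propto \frac{1}{t}
  \quad\Longrightarrow\quad
  \frac{\dot G}{G} \approx -\frac{1}{t} \sim -H_0 .
\]

% For a two-body orbit, conservation of orbital angular momentum
% gives a \propto 1/(GM); combined with Kepler's third law,
% n^2 a^3 = GM, the mean motion n then scales as (GM)^2, so
\[
  \frac{\dot n}{n} = 2\,\frac{\dot G}{G} .
\]
% A slowly decreasing G would therefore appear as a small secular
% deceleration of the Moon's mean motion n, which timed
% observations of the lunar orbit can in principle detect.
```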
From my disturbing experiences with the insubstantiality of fundamentals
in other fields, I learned how I could sometimes spot the bad
accepted theories from a combination of their strangeness, a
certain failure to provide true insight into the underlying
phenomena, and a
continuing need to do "maintenance theorizing" to patch the theory
in ever stranger ways as new data became available. I later added
"derivation from inductive reasoning" as additional grounds for
holding a theory suspect. Many of the accepted astronomical theories
in use today are "suspect" by all these criteria. I also learned how
to proceed when one encounters such a theory: Revert to the
principal investigative tools of science and scientists, by means of
which we try to separate good theories from bad ones.
These are embodied in the Scientific Method, a process that involves
competitive testing of all ideas. Most scientists understand, at
least abstractly, the importance of testing. The part they have
forgotten (or were never taught, because too many major theories in
too many fields would be called into question if it were) is
controls on testing. This is the step in which the test is
designed in such a way that the expected outcome, also called the
"bias of the experimenter", cannot influence the actual
outcome. Instead, it has become common practice to question or
challenge data that leads to an unexpected outcome while not even
checking data or procedures that give the expected result. Even more
common is an ad hoc patch to the idea being tested to accommodate
the outcome. Naturally, such a patch completely invalidates the
test, and requires some independent test with new data. But all too
commonly, the result of the original test is cited as evidence
supporting the patched idea. Such is the state of mainstream science
today.
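To make "controls on testing" concrete, here is a hypothetical
sketch (not any specific experiment's protocol) of one such
control: blinding. The analyst cannot tell which data set is which
until the analysis procedure is frozen, so the expected outcome
cannot steer which results get re-checked and which get waved
through. All names in the sketch are illustrative.

```python
import random

def blinded_test(datasets, analysis):
    """Run `analysis` on each data set under opaque labels, so the
    analyst cannot tell 'control' from 'treatment' and therefore
    cannot selectively re-examine only the unexpected results.

    `datasets` maps true labels to data; `analysis` is any function
    frozen *before* unblinding. Both names are hypothetical -- this
    is an illustration of the idea, not a real laboratory protocol.
    """
    # Assign opaque codes and shuffle so the order carries no hint.
    items = list(datasets.items())
    random.shuffle(items)
    key = {f"sample-{i}": label for i, (label, _) in enumerate(items)}
    blinded = {f"sample-{i}": data for i, (_, data) in enumerate(items)}

    # Every quality cut and check happens here, identically for all
    # samples, before anyone knows which sample is which.
    results = {code: analysis(data) for code, data in blinded.items()}

    # Unblind only after every result is final.
    return {key[code]: result for code, result in results.items()}

# Example: the same estimator applied blindly to two samples.
if __name__ == "__main__":
    data = {"control": [1.0, 1.1, 0.9], "treatment": [1.4, 1.5, 1.3]}
    print(blinded_test(data, lambda xs: sum(xs) / len(xs)))
```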